

Search for: All records

Creators/Authors contains: "Wazzan, Albatool"


  1. Explanations have increasingly been incorporated into intelligent systems to offer insights into the underlying AI models. In this paper, we investigate the impact of AI-generated visual explanations on users’ decision-making during an image matching task. Our work examines how these explanations affect correctness, timing, and confidence, and explores the role of AI literacy in user behavior. We conducted a mixed-methods user study with 54 participants who were tasked with identifying hotels from images using a specialized intelligent system. Participants were randomly assigned to use the system with or without visual explanation capabilities. Results showed that visual explanations affected neither decision accuracy nor user confidence in the image matching task. Participants with high AI literacy outperformed those with lower literacy but engaged less with the explanations. Distinct matching strategies emerged between high-AI-literacy and low-AI-literacy participants: high-literacy participants systematically examined high-ranked images and used the explanations for verification, while low-literacy participants followed more exhaustive approaches.